
Jiaming Ji

Enhance the Safety in Reinforcement Learning by ADRC Lagrangian Methods

Jan 26, 2026

AgentDoG: A Diagnostic Guardrail Framework for AI Agent Safety and Security

Jan 26, 2026

VLA-Arena: An Open-Source Framework for Benchmarking Vision-Language-Action Models

Dec 27, 2025

Medical Reasoning in the Era of LLMs: A Systematic Review of Enhancement Techniques and Applications

Aug 01, 2025

A Game-Theoretic Negotiation Framework for Cross-Cultural Consensus in LLMs

Jun 16, 2025

LegalReasoner: Step-wised Verification-Correction for Legal Judgment Reasoning

Jun 09, 2025

SafeLawBench: Towards Safe Alignment of Large Language Models

Jun 07, 2025

FinMME: Benchmark Dataset for Financial Multi-Modal Reasoning Evaluation

May 30, 2025

InterMT: Multi-Turn Interleaved Preference Alignment with Human Feedback

May 29, 2025

The Mirage of Multimodality: Where Truth is Tested and Honesty Unravels

May 26, 2025